
    Bayesian Inference for Partially Observed SDEs Driven by Fractional Brownian Motion

    We consider continuous-time diffusion models driven by fractional Brownian motion. Observations are assumed to possess a non-trivial likelihood given the latent path. Due to the non-Markovianity and high dimensionality of the latent paths, estimating posterior expectations is computationally challenging. We present a reparameterization framework based on the Davies and Harte method for sampling stationary Gaussian processes, and use this framework to construct a Markov chain Monte Carlo algorithm that allows computationally efficient Bayesian inference. The algorithm is based on a version of hybrid Monte Carlo that delivers increased efficiency when applied to the high-dimensional latent variables arising in this context. We specialise the methodology to a stochastic volatility model that allows for memory in the volatility increments through a fractional specification. The methodology is illustrated on simulated data and on the S&P500/VIX time series, and is shown to be effective. Contrary to the long-range dependence attribute often assumed for such models in the literature, with Hurst parameter larger than 1/2, the posterior distribution favours values smaller than 1/2, pointing towards medium-range dependence.
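    The key computational building block mentioned above is the Davies and Harte method for sampling stationary Gaussian processes. A minimal sketch of that construction, applied to fractional Gaussian noise with Hurst index H, is given below; the function name fgn_davies_harte and the NumPy circulant-embedding implementation are illustrative assumptions, not the paper's exact reparameterization framework.

```python
import numpy as np

def fgn_davies_harte(n, H, rng=None):
    """Sample n increments of fractional Gaussian noise with Hurst index H
    via a Davies-Harte circulant-embedding construction (illustrative sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    k = np.arange(n + 1)
    # Autocovariance of unit-variance fGn at lags 0..n.
    gamma = 0.5 * (np.abs(k + 1) ** (2 * H) - 2 * np.abs(k) ** (2 * H)
                   + np.abs(k - 1) ** (2 * H))
    # First column of the 2n x 2n circulant embedding of the Toeplitz covariance.
    row = np.concatenate([gamma, gamma[-2:0:-1]])    # length m = 2n
    lam = np.fft.fft(row).real                       # eigenvalues; nonnegative for fGn
    lam = np.clip(lam, 0.0, None)                    # clip rounding noise
    m = 2 * n
    # Complex Gaussian weights; the real part below has covariance gamma.
    z = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    x = np.fft.fft(np.sqrt(lam) * z) / np.sqrt(m)
    return x.real[:n]

# Example: a discretised fractional Brownian motion path on [0, 1] with H = 0.3,
# i.e. the medium-range dependence regime favoured by the posterior.
n, H = 1000, 0.3
path = np.cumsum(fgn_davies_harte(n, H)) * (1.0 / n) ** H
```

    Cumulatively summing the returned increments and rescaling by the step size raised to the power H, as in the example, gives a discretised fractional Brownian motion path; in the paper the Hurst parameter itself is an unknown to be inferred.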

    A 4D-Var Method with Flow-Dependent Background Covariances for the Shallow-Water Equations

    The 4D-Var method for filtering partially observed nonlinear chaotic dynamical systems consists of finding the maximum a-posteriori (MAP) estimator of the initial condition of the system given observations over a time window, and propagating it forward to the current time via the model dynamics. This method forms the basis of most currently operational weather forecasting systems. In practice the optimization becomes infeasible if the time window is too long, due to the non-convexity of the cost function, the effect of model errors, and the limited precision of the ODE solvers. Hence the window has to be kept sufficiently short, and the observations in the previous windows can be taken into account via a Gaussian background (prior) distribution. The choice of the background covariance matrix is an important question that has received much attention in the literature. In this paper, we define the background covariances in a principled manner, based on observations in the previous b assimilation windows, for a parameter b ≥ 1. The method is at most b times more computationally expensive than using fixed background covariances, requires little tuning, and greatly improves the accuracy of 4D-Var. As a concrete example, we focus on the shallow-water equations. The proposed method is compared against state-of-the-art approaches in data assimilation and is shown to perform favourably on simulated data. We also illustrate our approach on data from the 2011 tsunami in Fukushima, Japan.
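    For readers unfamiliar with the cost function being minimised, the sketch below writes out strong-constraint 4D-Var for a toy linear model: a background (prior) penalty on the initial condition plus observation misfits accumulated over the assimilation window, minimised to obtain the MAP initial state. The toy dynamics, observation operator and fixed background covariance are assumptions for illustration; the paper itself works with the shallow-water equations and builds flow-dependent background covariances from the previous b windows.

```python
import numpy as np
from scipy.optimize import minimize

def make_4dvar_cost(model_step, y_obs, H_obs, B_inv, R_inv, x_b):
    """Strong-constraint 4D-Var cost over one assimilation window:
    background penalty on the initial condition plus observation misfits."""
    def cost(x0):
        J = 0.5 * (x0 - x_b) @ B_inv @ (x0 - x_b)
        x = x0.copy()
        for y in y_obs:                       # one observation per model step here
            x = model_step(x)
            innov = H_obs @ x - y
            J += 0.5 * innov @ R_inv @ innov
        return J
    return cost

# Toy linear dynamics standing in for a shallow-water solver (assumption).
d = 4
A = np.eye(d) + 0.05 * np.diag(np.ones(d - 1), 1)
model_step = lambda x: A @ x
H_obs = np.eye(d)                             # observe the full state
rng = np.random.default_rng(0)
truth0 = rng.standard_normal(d)
x, y_obs = truth0.copy(), []
for _ in range(10):                           # synthetic observations over the window
    x = model_step(x)
    y_obs.append(H_obs @ x + 0.1 * rng.standard_normal(d))

x_b = truth0 + 0.5 * rng.standard_normal(d)   # background (prior mean) guess
cost = make_4dvar_cost(model_step, y_obs, H_obs, np.eye(d), np.eye(d) / 0.01, x_b)
x_map = minimize(cost, x_b, method="L-BFGS-B").x   # MAP initial condition
x_now = x_map
for _ in range(10):                           # propagate forward to the current time
    x_now = model_step(x_now)
```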

    Online Smoothing for Diffusion Processes Observed with Noise

    We introduce a methodology for online estimation of smoothing expectations for a class of additive functionals, in the context of a rich family of diffusion processes (which may include jumps) observed at discrete-time instances. We overcome the unavailability of the transition density of the underlying SDE by working on the augmented pathspace. The new method can be applied, for instance, to carry out online parameter inference for the designated class of models. Algorithms defined on the infinite-dimensional pathspace have been developed in recent years, mainly in the context of MCMC techniques; there, the main benefit is the achievement of mesh-free mixing times for the practical time-discretised algorithm used on a computer. Our methodology sets up the framework for infinite-dimensional online filtering; an important practical consequence is the construction of estimates whose variance does not increase with decreasing mesh size. Besides regularity conditions, our method is, in principle, applicable under the weak assumption (relative to the restrictive conditions often required in the MCMC or filtering literature for methods defined on pathspace) that the SDE covariance matrix is invertible.
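    As a rough illustration of online smoothing for additive functionals of a discretely observed diffusion, the sketch below runs a bootstrap particle filter on an Euler-Maruyama discretisation and carries a running additive functional along the surviving particle ancestries. This naive construction (which suffers from path degeneracy, and whose variance does depend on the mesh) is only an assumed stand-in; the paper's contribution is precisely a pathspace formulation whose estimates do not degrade as the mesh size decreases.

```python
import numpy as np

def online_additive_smoother(y, x0, drift, sigma, h, obs_loglik, func, N=500, rng=None):
    """Bootstrap particle filter on an Euler-Maruyama discretisation of the SDE,
    accumulating an additive functional along surviving particle ancestries
    (naive path-space smoother; illustrative only)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.full(N, x0, dtype=float)
    S = np.zeros(N)                        # running additive functional per particle
    estimates = []
    for y_t in y:
        x_prev = x.copy()
        x = x + drift(x) * h + sigma * np.sqrt(h) * rng.standard_normal(N)  # propagate
        S = S + func(x_prev, x)            # accumulate the additive functional
        logw = obs_loglik(y_t, x)          # weight by the observation density
        w = np.exp(logw - logw.max())
        w /= w.sum()
        estimates.append(np.sum(w * S))    # online smoothing estimate at this time
        idx = rng.choice(N, size=N, p=w)   # multinomial resampling
        x, S = x[idx], S[idx]
    return estimates

# Example: Ornstein-Uhlenbeck signal observed with unit Gaussian noise (assumption).
theta, sigma, h = 0.5, 1.0, 0.1
drift = lambda x: -theta * x
obs_loglik = lambda y, x: -0.5 * (y - x) ** 2
func = lambda x_prev, x: x ** 2 * h        # e.g. a discretised integral of the squared state
rng = np.random.default_rng(1)
xs, ys = [0.0], []
for _ in range(50):
    xs.append(xs[-1] + drift(xs[-1]) * h + sigma * np.sqrt(h) * rng.standard_normal())
    ys.append(xs[-1] + rng.standard_normal())
est = online_additive_smoother(np.array(ys), 0.0, drift, sigma, h, obs_loglik, func, rng=rng)
```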

    Monte Carlo Co-Ordinate Ascent Variational Inference

    In Variational Inference (VI), coordinate-ascent and gradient-based approaches are two major types of algorithms for approximating difficult-to-compute probability densities. In real-world implementations of complex models, Monte Carlo methods are widely used to estimate expectations in coordinate-ascent approaches and gradients in derivative-driven ones. We discuss a Monte Carlo Co-ordinate Ascent VI (MC-CAVI) algorithm that makes use of Markov chain Monte Carlo (MCMC) methods to calculate the expectations required within Co-ordinate Ascent VI (CAVI). We show that, under regularity conditions, an MC-CAVI recursion will get arbitrarily close to a maximiser of the evidence lower bound (ELBO) with any given high probability. In numerical examples, the performance of the MC-CAVI algorithm is compared with that of MCMC and, as a representative of derivative-based VI methods, of Black Box VI (BBVI). We discuss and demonstrate MC-CAVI's suitability for models with hard constraints in simulated and real examples. We compare MC-CAVI's performance with that of MCMC in an important complex model used in Nuclear Magnetic Resonance (NMR) spectroscopy data analysis; BBVI is nearly impossible to employ in this setting due to the hard constraints involved in the model.
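    A minimal sketch of the MC-CAVI idea is given below for a toy Normal-Gamma model: the expectations appearing in each CAVI coordinate update are replaced by Monte Carlo averages over draws from the current variational factor. The conjugate model, the hyperparameters, and the use of plain Monte Carlo rather than MCMC are assumptions chosen to keep the example self-contained; in the paper, MCMC is used precisely when such expectations are intractable, e.g. under hard constraints.

```python
import numpy as np

def mc_cavi_normal_gamma(x, iters=50, M=200, rng=None):
    """Coordinate-ascent VI for a conjugate Normal-Gamma model, with the
    expectations in each coordinate update replaced by Monte Carlo averages
    over draws from the current variational factor (illustrative MC-CAVI)."""
    rng = np.random.default_rng() if rng is None else rng
    n, xbar = len(x), np.mean(x)
    mu0, lam0, a0, b0 = 0.0, 1.0, 1.0, 1.0        # prior hyperparameters (assumed)
    a_n = a0 + (n + 1) / 2.0                      # fixed by conjugacy
    b_n, lam_n, mu_n = b0, lam0 + n, xbar         # initial factor parameters
    for _ in range(iters):
        # Update q(mu): needs E_q[tau], estimated from draws of the current q(tau).
        tau_draws = rng.gamma(a_n, 1.0 / b_n, size=M)
        E_tau = tau_draws.mean()
        mu_n = (lam0 * mu0 + n * xbar) / (lam0 + n)
        lam_n = (lam0 + n) * E_tau
        # Update q(tau): needs E_q[mu] and E_q[mu^2], estimated from q(mu).
        mu_draws = rng.normal(mu_n, 1.0 / np.sqrt(lam_n), size=M)
        E_mu, E_mu2 = mu_draws.mean(), (mu_draws ** 2).mean()
        b_n = b0 + 0.5 * (np.sum(x ** 2) - 2 * E_mu * np.sum(x) + n * E_mu2
                          + lam0 * (E_mu2 - 2 * mu0 * E_mu + mu0 ** 2))
    return (mu_n, lam_n), (a_n, b_n)

data = np.random.default_rng(2).normal(loc=2.0, scale=0.5, size=100)
(q_mu, q_lam), (q_a, q_b) = mc_cavi_normal_gamma(data)
print("approx posterior mean of mu:", q_mu, " of tau:", q_a / q_b)
```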